Escaping the curse of dimensionality in similarity learning: Efficient Frank-Wolfe algorithm and generalization bounds
Similar resources
A Distributed Frank-Wolfe Algorithm for Communication-Efficient Sparse Learning
Learning sparse combinations is a frequent theme in machine learning. In this paper, we study its associated optimization problem in the distributed setting where the elements to be combined are not centrally located but spread over a network. We address the key challenges of balancing communication costs and optimization errors. To this end, we propose a distributed Frank-Wolfe (dFW) algorithm...
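The abstracts in this list all build on the projection-free Frank-Wolfe scheme: at each step a linear minimization oracle over the feasible set replaces a projection, which is what yields the sparse iterates mentioned above. As a rough illustration only, here is a minimal single-machine sketch over the probability simplex (the function name, toy objective, and classic step size 2/(t+2) are illustrative assumptions, not taken from any of the cited papers):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Minimize a smooth convex f over the probability simplex.

    grad: callable returning the gradient of f at x.
    The linear minimization oracle over the simplex returns the
    vertex e_i with the most negative gradient coordinate, so each
    step adds at most one new nonzero (hence the sparse iterates).
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        i = int(np.argmin(g))            # LMO: best simplex vertex
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)          # standard diminishing step size
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Toy problem: f(x) = 0.5 * ||x - b||^2, i.e. project b onto the simplex.
b = np.array([0.2, 0.5, 0.1])
x = frank_wolfe_simplex(lambda x: x - b, np.ones(3) / 3)
```

Because every iterate is a convex combination of simplex vertices, no projection step is ever needed; the distributed variant in the paper above exploits the same oracle structure to keep communication sparse.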
Efficient Image and Video Co-localization with Frank-Wolfe Algorithm
In this paper, we tackle the problem of performing efficient co-localization in images and videos. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images or videos. Building upon recent state-of-the-art methods, we show how we are able to naturally incorporate temporal terms and constraints for video co-localiza...
Escaping the curse of dimensionality in estimating multivariate transfer entropy
Multivariate transfer entropy (TE) is a model-free approach to detect causalities in multivariate time series. It is able to distinguish direct from indirect causality and common drivers without assuming any underlying model. But despite these advantages it has mostly been applied in a bivariate setting as it is hard to estimate reliably in high dimensions since its definition involves infinite...
Escaping the Curse of Dimensionality with a Tree-based Regressor
We present the first tree-based regressor whose convergence rate depends only on the intrinsic dimension of the data, namely its Assouad dimension. The regressor uses the RPtree partitioning procedure, a simple randomized variant of k-d trees.
Learning Infinite RBMs with Frank-Wolfe
In this work, we propose an infinite restricted Boltzmann machine (RBM), whose maximum likelihood estimation (MLE) corresponds to a constrained convex optimization. We consider the Frank-Wolfe algorithm to solve the program, which provides a sparse solution that can be interpreted as inserting a hidden unit at each iteration, so that the optimization process takes the form of a sequence of fini...
Journal: Neurocomputing
Year: 2019
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2018.12.060